Context¶

Different features can be filtered out of the neural signals. Here we investigate which preprocessing steps are suitable for this purpose.

Imports¶

In [1]:
from skimage import io
import skimage
import numpy as np
import matplotlib.pyplot as plt
from scipy.ndimage import gaussian_filter, uniform_filter
import pickle
In [2]:
import imageio
from pathlib import Path
from matplotlib.pyplot import show
from argparse import ArgumentParser

from pyoptflow import HornSchunck, getimgfiles
from pyoptflow.plots import compareGraphs
In [3]:
from PIL import Image
import os
from scipy.signal import argrelextrema
from skimage import exposure
In [4]:
import matplotlib
import matplotlib.animation
from IPython.display import HTML
matplotlib.rcParams['animation.embed_limit'] = 2**128
In [5]:
np.array(np.clip([300],0,255), dtype=np.uint8)
Out[5]:
array([255], dtype=uint8)

Import our custom utility methods¶

In [26]:
import sys
%reload_ext autoreload
%autoreload 2
sys.path.append('..')

from utils.visualization_tools import *
import utils.visualization_tools
from utils.data_transformations import *
import utils.data_transformations
from utils.diverse import *
import utils.diverse

The following methods are available in these modules

In [7]:
print_module_methods(utils.diverse)
print_module_methods(module)

In [8]:
print_module_methods(utils.visualization_tools)
display_combined(u, v, Inew, scale=100, quivstep=3, fig=None, ax=None, figsize=(10, 10), vmin=0, vmax=1)

print_points_and_background(img, x, y, point_size=10, marker='.')

In [9]:
print_module_methods(utils.data_transformations)
HornSchunck(im1: numpy.ndarray, im2: numpy.ndarray, *, alpha: float = 0.001, Niter: int = 8, verbose: bool = False) -> Tuple[numpy.ndarray, numpy.ndarray]

apply_mask(frames, mask)

clipped_adaptive(tensor, clipping=0.8)

fourier(signal, sampling_rate=100)

framewise_difference(frames, pixelwise_mean, bigdata=False)

gaussian_filter(input, sigma, order=0, output=None, mode='reflect', cval=0.0, truncate=4.0)

getimgfiles(stem: pathlib.Path, pat: str) -> list

horn_schunck(tensor, frames=None)

maxima(vector, pre_smoothing=100, minval=0)

normalize(frames)

poly_smooth_2d(coords, polynomial_degree=11, steps_per_pixel=10, epsilon=0.1)

substract_pixel_min(tensor)

uniform_filter(input, size=3, output=None, mode='reflect', cval=0.0, origin=0)

Load data and inspect a frame of the raw data¶

In [10]:
from pathlib import Path
source_folder = os.path.join(Path(os.getcwd()).parent, "source_data")
In [12]:
frames = skimage.io.imread(os.path.join(source_folder,"runstart16_X1.tif"))
In [26]:
plt.imshow(frames[0,:,:])
Out[26]:
<matplotlib.image.AxesImage at 0x7f3969b2d1d0>
In [13]:
frames.shape
Out[13]:
(12255, 270, 320)
In [14]:
frames = frames[:1000,:,:]

Preprocessing¶

Here I calculate the difference from the pixelwise mean, as well as a smoothed version that promises to increase the signal-to-noise ratio.
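The implementation of `framewise_difference` lives in `utils.data_transformations` and is not shown here; the following is only a minimal sketch of what it is assumed to do, with the chunk size and the exact meaning of the `bigdata` flag being my guesses:

```python
import numpy as np

def framewise_difference_sketch(frames, pixelwise_mean, bigdata=False):
    """Subtract the pixelwise mean image from every frame (sketch).

    With bigdata=True the subtraction is done chunk by chunk, which is
    assumed to be how the helper limits peak memory usage.
    """
    if not bigdata:
        return frames - pixelwise_mean  # broadcasts (T, H, W) - (H, W)
    out = np.empty_like(frames, dtype=np.float64)
    for start in range(0, len(frames), 100):  # chunked to bound peak memory
        out[start:start + 100] = frames[start:start + 100] - pixelwise_mean
    return out
```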

In [15]:
mean = np.mean(frames,axis=0)#pixelwise mean
In [16]:
difference = normalize(framewise_difference(frames, mean, bigdata=False))
In [17]:
smooth = normalize(gaussian_filter(substract_pixel_min(difference), 1))
In [18]:
framewise_normalized = (np.array([normalize(frame) for frame in smooth]))
In [19]:
smoother = normalize(uniform_filter(framewise_normalized, [10, 20, 20]))
In [20]:
details = framewise_normalized-smoother
In [21]:
details = np.array([normalize(frame) for frame in details])
In [22]:
#details = normalize(details)
In [28]:
contrast_enhanced = clipped_adaptive(details)
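The `clipped_adaptive` helper (signature `clipped_adaptive(tensor, clipping=0.8)` above) is not shown in this notebook. A plausible sketch, assuming it clips the bright tail and then applies adaptive histogram equalization framewise via skimage; the details are guesses:

```python
import numpy as np
from skimage import exposure

def clipped_adaptive_sketch(tensor, clipping=0.8):
    """Assumed behaviour of clipped_adaptive: clip values above `clipping`,
    rescale to [0, 1], then apply CLAHE to each frame."""
    clipped = np.clip(tensor, 0, clipping) / clipping  # compress bright tail
    return np.array([exposure.equalize_adapthist(frame) for frame in clipped])
```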

Image enhancement and filtering¶

For visualization of the slow waves' total activation.

Smooth shows spreading slow waves¶

In [37]:
%%capture
fig, ax = plt.subplots(1, figsize=(10,10))

im = ax.imshow(smooth[0,:,:], vmin =.3, vmax=.5)#vmin=.25,vmax=.3)
startframe = 70
ani = matplotlib.animation.FuncAnimation(fig, lambda i: im.set_array(smooth[startframe+i]), frames=30).to_jshtml()
In [38]:
HTML(ani)
Out[38]:

Framewise normalization of the smoothed tensor shows details¶

For visualization of nuances of small-scale travelling peaks in the activation, each frame is linearly rescaled so that its lowest value maps to 0 and its highest to 1.
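The `normalize` helper used throughout is assumed to be exactly this linear min-max rescaling; a minimal sketch (the constant-input guard is my addition):

```python
import numpy as np

def normalize_sketch(tensor):
    """Linear rescaling to [0, 1]: lowest value -> 0, highest -> 1."""
    tensor = np.asarray(tensor, dtype=np.float64)
    lo, hi = tensor.min(), tensor.max()
    if hi == lo:  # constant input: avoid division by zero
        return np.zeros_like(tensor)
    return (tensor - lo) / (hi - lo)
```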

In [39]:
%%capture
import matplotlib.animation
from IPython.display import HTML
fig, ax = plt.subplots(1, figsize=(10,10))
def display(frame):
    global fig, ax
    ax.cla()
    im = ax.imshow(frame,vmin=0,vmax=1)#NORMALIZED FRAME HERE
    return fig, ax
startframe = 50
ani = matplotlib.animation.FuncAnimation(fig, lambda i: display(framewise_normalized[startframe+i]), frames=20).to_jshtml()
In [40]:
HTML(ani)
Out[40]:

The difference from the strongly smoothed tensor (in space and time) enhances details¶

In [43]:
%%capture
import matplotlib.animation
from IPython.display import HTML
fig, ax = plt.subplots(1, figsize=(10,10))
def display(frame):
    global fig, ax
    ax.cla()
    im = ax.imshow(frame)#,vmin=.1,vmax=1)

    return fig, ax
startframe = 50
ani = matplotlib.animation.FuncAnimation(fig, lambda i: display(details[startframe+i]), frames=20).to_jshtml()
In [44]:
HTML(ani)
Out[44]:

Adaptive histogram equalization¶

In [55]:
%%capture
import matplotlib.animation
from IPython.display import HTML
fig, ax = plt.subplots(1, figsize=(10,10))
def display(frame):
    global fig, ax
    ax.cla()
    im = ax.imshow(frame,vmin=.0,vmax=1)#NORMALIZE ROI FRAME HERE
    return fig, ax
startframe = 0
ani = matplotlib.animation.FuncAnimation(fig, lambda i: display(contrast_enhanced[startframe+i]), frames=20).to_jshtml()
In [56]:
HTML(ani)
Out[56]:
In [57]:
def sample_frame_in_roi(frame, window_size, left, right, top, bottom):   
    further_preprocessed = exposure.equalize_adapthist(normalize(frame[left:right,top:bottom]), clip_limit=0.03)
    further_preprocessed = further_preprocessed[:window_size-8,:window_size-8]
    return further_preprocessed
    
def sample_roi(tensor, start_frame, stop_frame, window_size = 60,left = 120, top = 80,):
    right = left + window_size
    bottom = top + window_size
    return np.array([sample_frame_in_roi(tensor[i], window_size, left, right, top, bottom) for i in range(start_frame,stop_frame)])
In [58]:
np.save("roi_background.npy",sample_roi(frames,0,400))
In [59]:
roi = sample_roi(details,0,400)
In [60]:
np.save("roi.npy",roi)
In [61]:
roi.shape
Out[61]:
(400, 52, 52)
In [62]:
%%capture
fig, ax = plt.subplots(1, figsize=(10,10))

im = ax.imshow(roi[0,:,:], vmin=.6, vmax=.8)
startframe = 0
ani = matplotlib.animation.FuncAnimation(fig, lambda i: im.set_array(roi[startframe+i]), frames=20).to_jshtml()
In [63]:
open("anim.html","w").write(ani)
Out[63]:
437396
In [64]:
HTML(ani)
Out[64]:

There are two ways to look at this:

- Either there are physical clusters of neurons and they talk to each other
- Or there are liquid informational codes and they travel

Are there signs of systematic noise or artifacts?¶

In [65]:
upper_decile_roi = np.array([np.quantile(f, 0.9) for f in roi])
In [66]:
signal = upper_decile_roi - gaussian_filter(upper_decile_roi, 5)
x, freq = fourier(signal)
fig, ax = plt.subplots(2)
ax[0].plot(signal-np.mean(signal))
ax[1].plot(x,freq)
Out[66]:
[<matplotlib.lines.Line2D at 0x7f47b18da350>]

The Fourier plot does not indicate any dominant frequencies.
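The `fourier` helper (signature `fourier(signal, sampling_rate=100)` above) is assumed to return a frequency axis and a magnitude spectrum; a minimal sketch of that assumption based on NumPy's real FFT:

```python
import numpy as np

def fourier_sketch(signal, sampling_rate=100):
    """Assumed behaviour of the fourier helper: frequency axis and
    normalized magnitude spectrum of a real-valued signal."""
    signal = np.asarray(signal, dtype=np.float64)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sampling_rate)
    magnitude = np.abs(np.fft.rfft(signal)) / len(signal)
    return freqs, magnitude
```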

Horn and Schunck dense optical flow¶
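Horn and Schunck estimate a dense flow field (u, v) by minimizing a brightness-constancy term plus a smoothness term, iterating u = ū − Ix(Ix·ū + Iy·v̄ + It)/(α² + Ix² + Iy²) and the analogous update for v. A minimal NumPy sketch of this scheme, not the exact `pyoptflow` implementation used below:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck_sketch(im1, im2, alpha=0.001, n_iter=8):
    """Minimal Horn-Schunck dense optical flow between two frames."""
    im1 = im1.astype(np.float64)
    im2 = im2.astype(np.float64)
    # Spatial and temporal derivatives, averaged over both frames.
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)
    # Kernel that averages each pixel's neighbours.
    avg = np.array([[1/12, 1/6, 1/12], [1/6, 0, 1/6], [1/12, 1/6, 1/12]])
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        update = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * update
        v = v_bar - Iy * update
    return u, v
```

A larger `alpha` enforces a smoother flow field; the default here mirrors the signature listed for `HornSchunck` above.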

In [67]:
x_comp, y_comp = horn_schunck(contrast_enhanced, 200)
In [70]:
%%capture
fig, ax = display_combined(x_comp[0],y_comp[0], details[1])
start = 25
frames = 10

def animate(i):
    global start
    i += start
    print(".", end ="")    
    display_combined(x_comp[i]/10,y_comp[i]/10, details[i+1], fig=fig, ax=ax)
    #Q.set_UVC(np.flipud(rescaled[:,:,0]), -np.flipud(rescaled[:,:,1]))

ani = matplotlib.animation.FuncAnimation(fig, animate, frames=frames)
In [71]:
from IPython.display import HTML
HTML(ani.to_jshtml())
Out[71]:
In [72]:
roi = sample_roi(details,0,100)
In [73]:
x_comp, y_comp = horn_schunck(roi, len(roi)-1)
In [76]:
%%capture
fig, ax = display_combined(x_comp[0],y_comp[0], details[1])
start = 0
frames = 10

def animate(i):
    i += start
    print(".", end ="")    
    display_combined(x_comp[i]/5,y_comp[i]/5, roi[i+1], fig=fig, ax=ax, scale=10, quivstep=1)
    #Q.set_UVC(np.flipud(rescaled[:,:,0]), -np.flipud(rescaled[:,:,1]))

ani = matplotlib.animation.FuncAnimation(fig, animate, frames=frames)
In [77]:
HTML(ani.to_jshtml())
Out[77]:

Conclusion¶

One can filter both small-scale motion patterns and large-scale dynamics. The size of the data represents a challenge because of working-memory restrictions when using NumPy methods directly. Custom methods can help to reduce the memory requirements. Developing scripts that run on a computational grid with large memory capacities could also help.
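One such memory-reducing pattern, sketched here as an illustration rather than as the project's actual code, is to apply framewise transformations chunk by chunk instead of materialising intermediate copies of the full tensor:

```python
import numpy as np

def process_in_chunks(frames, func, chunk_size=500):
    """Apply a framewise transformation chunk by chunk so that only
    chunk_size frames of intermediate data exist at any time."""
    out = np.empty(frames.shape, dtype=np.float64)
    for start in range(0, len(frames), chunk_size):
        out[start:start + chunk_size] = func(frames[start:start + chunk_size])
    return out
```

Combined with `np.memmap` for the source tensor, this keeps peak memory roughly proportional to the chunk size rather than to the full recording.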